Online Bipartite Matching with Advice: Tight Robustness-Consistency Tradeoffs for the Two-Stage Model
We study the two-stage vertex-weighted online bipartite matching problem of
Feng, Niazadeh, and Saberi (SODA 2021) in a setting where the algorithm has
access to a suggested matching that is recommended in the first stage. We
evaluate an algorithm by its robustness, which is its performance relative to
that of the optimal offline matching, and its consistency, which is its
performance when the advice or prediction it is given is correct. For this
problem we characterize the Pareto-efficient frontier between robustness and
consistency, a characterization that is rare in the literature on
advice-augmented algorithms yet necessary for certifying that such an
algorithm is optimal. Specifically, for any point on this frontier we propose
an algorithm that attains the corresponding robustness and consistency
guarantees, and prove that no other algorithm can achieve a better tradeoff.
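As an illustration of how robustness and consistency are measured, the sketch below simulates a toy advice-following matcher on a tiny vertex-weighted instance; the instance, the greedy fallback rule, and all names are hypothetical and this is not the algorithm proposed in the paper.

    # Toy evaluation of robustness and consistency for an advice-following
    # online matcher (illustrative only; not the paper's algorithm).
    from itertools import permutations

    weights = {"a": 3.0, "b": 2.0, "c": 1.0}              # offline vertices and weights
    neighbors = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"a"}}  # online vertices arrive 1, 2, 3

    def offline_opt():
        """Brute-force the maximum vertex-weighted matching."""
        best = 0.0
        for perm in permutations(weights, len(neighbors)):
            val, used = 0.0, set()
            for v, u in zip(neighbors, perm):
                if u in neighbors[v] and u not in used:
                    val += weights[u]
                    used.add(u)
            best = max(best, val)
        return best

    def follow_advice(advice):
        """Match each arriving vertex to its advised neighbor if free,
        otherwise fall back to the heaviest free neighbor."""
        matched, val = set(), 0.0
        for v in neighbors:
            u = advice.get(v)
            if u not in neighbors[v] or u in matched:
                free = [w for w in neighbors[v] if w not in matched]
                u = max(free, key=weights.get, default=None)
            if u is not None:
                matched.add(u)
                val += weights[u]
        return val

    opt = offline_opt()
    good_advice = {1: "b", 2: "c", 3: "a"}    # advice agreeing with the optimum
    bad_advice = {1: "a", 2: "b", 3: "a"}     # adversarial advice
    print("consistency ~", follow_advice(good_advice) / opt)
    print("robustness  ~", follow_advice(bad_advice) / opt)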
Improved Analysis of RANKING for Online Vertex-Weighted Bipartite Matching
In this paper, we consider the online vertex-weighted bipartite matching
problem in the random arrival model. We consider the generalization of the
RANKING algorithm for this problem introduced by Huang, Tang, Wu, and Zhang
(TALG 2019), who show that their algorithm has a competitive ratio of 0.6534.
We show that assumptions in their analysis can be weakened, allowing us to
replace their derivation of a crucial function on the unit square with a
linear program that computes the values of a best possible function under these
assumptions on a discretized unit square. We show that the discretization does
not incur much error, and show computationally that we can obtain a competitive
ratio of 0.6629. To compute the bound over our discretized unit square we use
parallelization, and the computation still required two days on a 64-core machine.
Furthermore, by modifying our linear program somewhat, we can show
computationally an upper bound on our approach of 0.6688; any further progress
beyond this bound will require either further weakening of the assumptions or
a stronger analysis than that of Huang et al.
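The LP-on-a-grid idea can be sketched roughly as follows; the grid size, the use of scipy, and every constraint below are placeholders chosen only to show the structure of such a discretized LP, not the constraints actually used in the analysis.

    # Illustrative discretized LP: search for a best function g on an n x n grid
    # over the unit square subject to simple linear constraints.  The constraints
    # are placeholders for illustration, NOT those from the paper's analysis.
    import numpy as np
    from scipy.optimize import linprog

    n = 10                        # grid resolution (a real computation is much finer)
    num_g = n * n                 # one variable g[i, j] per grid point, plus a bound t
    idx = lambda i, j: i * n + j

    rows, b = [], []
    def add_row(coeffs, rhs):     # coeffs: {variable index: coefficient}
        row = np.zeros(num_g + 1)
        for k, c in coeffs.items():
            row[k] = c
        rows.append(row)
        b.append(rhs)

    # Placeholder structural constraints: g nondecreasing in each coordinate.
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                add_row({idx(i, j): 1.0, idx(i + 1, j): -1.0}, 0.0)
            if j + 1 < n:
                add_row({idx(i, j): 1.0, idx(i, j + 1): -1.0}, 0.0)

    # Placeholder "competitive ratio" constraints: t <= average of g over each row.
    for i in range(n):
        coeffs = {idx(i, j): -1.0 / n for j in range(n)}
        coeffs[num_g] = 1.0       # the scalar bound t is the last variable
        add_row(coeffs, 0.0)

    c = np.zeros(num_g + 1)
    c[num_g] = -1.0               # maximize t by minimizing -t
    bounds = [(0.0, 1.0)] * (num_g + 1)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(b), bounds=bounds)
    print("best achievable bound on this toy LP:", -res.fun)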
High Probability Complexity Bounds for Line Search Based on Stochastic Oracles
We consider a line-search method for continuous optimization under a
stochastic setting where the function values and gradients are available only
through inexact probabilistic zeroth and first-order oracles. These oracles
capture multiple standard settings including expected loss minimization and
zeroth-order optimization. Moreover, our framework is very general and allows
the function and gradient estimates to be biased. The proposed algorithm is
simple to describe, easy to implement, and uses these oracles in a similar way
as the standard deterministic line search uses exact function and gradient
values. Under fairly general conditions on the oracles, we derive a high
probability tail bound on the iteration complexity of the algorithm when
applied to non-convex smooth functions. These results are stronger than those
for other existing stochastic line search methods and apply in more general
settings.
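A minimal sketch of a backtracking line search driven by noisy zeroth- and first-order oracles, in the spirit described above; the test function, noise model, and constants are assumptions for illustration, not the paper's exact method or conditions.

    # Minimal stochastic backtracking line search (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):                      # true smooth, non-convex objective (assumed)
        return np.sum(x ** 2) + 0.5 * np.sum(np.sin(3 * x))

    def grad(x):
        return 2 * x + 1.5 * np.cos(3 * x)

    def f_oracle(x, sigma=1e-2):   # inexact zeroth-order oracle
        return f(x) + sigma * rng.standard_normal()

    def g_oracle(x, sigma=1e-2):   # inexact first-order oracle
        return grad(x) + sigma * rng.standard_normal(x.shape)

    def stochastic_line_search(x0, alpha0=1.0, theta=0.5, c1=1e-4, max_iter=200):
        x, alpha = x0.copy(), alpha0
        for _ in range(max_iter):
            g = g_oracle(x)
            fx = f_oracle(x)
            step = x - alpha * g
            # Armijo-style sufficient decrease test, but on estimated values.
            if f_oracle(step) <= fx - c1 * alpha * np.dot(g, g):
                x, alpha = step, min(alpha / theta, alpha0)   # accept, grow step size
            else:
                alpha *= theta                                # reject, shrink step size
        return x

    x = stochastic_line_search(np.full(5, 2.0))
    print("final point:", np.round(x, 3), " true f:", round(f(x), 4))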
Sample Complexity Analysis for Adaptive Optimization Algorithms with Stochastic Oracles
Several classical adaptive optimization algorithms, such as line search and
trust region methods, have been recently extended to stochastic settings where
function values, gradients, and, in some cases, Hessians are estimated via
stochastic oracles. Unlike the majority of stochastic methods, these methods do
not use a pre-specified sequence of step size parameters, but adapt the step
size parameter according to the estimated progress of the algorithm and use it
to dictate the accuracy required from the stochastic approximations. The
requirements on stochastic approximations are, thus, also adaptive and the
oracle costs can vary from iteration to iteration. The step size parameters in
these methods can increase and decrease based on the perceived progress, but
unlike the deterministic case they are not bounded away from zero due to
possible oracle failures, and bounds on the step size parameter have not been
previously derived. This creates obstacles in the total complexity analysis of
such methods, because the oracle costs are typically decreasing in the step
size parameter, and could be arbitrarily large as the step size parameter goes
to 0. Thus, until now only the total iteration complexity of these methods has
been analyzed. In this paper, we derive a lower bound on the step size
parameter that holds with high probability for a large class of adaptive
stochastic methods. We then use this lower bound to derive a framework for
analyzing the expected and high probability total oracle complexity of any
method in this class. Finally, we apply this framework to analyze the total
sample complexity of two particular algorithms, STORM and SASS, in the expected
risk minimization problem.
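The adaptive interplay between the step-size parameter and the oracle cost can be sketched roughly as follows; the acceptance test, the sample-size rule, and the objective are illustrative assumptions, and this loop is neither STORM nor SASS.

    # Generic adaptive step-size loop (illustrative; not STORM or SASS themselves).
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_gradient(x, n_samples):
        """Monte-Carlo estimate of the gradient of ||x||^2; variance ~ 1/n_samples."""
        noise = rng.standard_normal((n_samples, x.size)) * 0.5
        return (2 * x + noise).mean(axis=0)

    def sample_value(x, n_samples):
        """Monte-Carlo estimate of ||x||^2."""
        return float(np.sum(x ** 2) + (rng.standard_normal(n_samples) * 0.1).mean())

    def adaptive_method(x0, alpha0=1.0, gamma=2.0, eta=1e-3, iters=100):
        x, alpha, total_samples = x0.copy(), alpha0, 0
        for _ in range(iters):
            # Oracle accuracy (and hence cost) adapts to the step size:
            # smaller steps demand more samples per estimate (capped here).
            n = int(min(np.ceil(1.0 / alpha ** 2), 10_000))
            g = sample_gradient(x, n)
            total_samples += 3 * n          # one gradient and two value estimates
            trial = x - alpha * g
            # "Perceived progress" test on estimated objective values.
            if sample_value(trial, n) <= sample_value(x, n) - eta * alpha * np.dot(g, g):
                x, alpha = trial, min(gamma * alpha, 10.0)   # success: increase step size
            else:
                alpha /= gamma                               # failure: decrease step size
        return x, total_samples

    x, cost = adaptive_method(np.full(4, 3.0))
    print("solution:", np.round(x, 3), " total oracle samples:", cost)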
A 4/3-Approximation Algorithm for Half-Integral Cycle Cut Instances of the TSP
A long-standing conjecture for the traveling salesman problem (TSP) states
that the integrality gap of the standard linear programming relaxation of the
TSP is at most 4/3. Despite significant efforts, the conjecture remains open.
We consider the half-integral case, in which the LP has solution values in
{0, 1/2, 1}. Such instances have been conjectured to be the most difficult
instances for the overall four-thirds conjecture. Karlin, Klein, and Oveis
Gharan, in a breakthrough result, were able to show that in the half-integral
case, the integrality gap is at most 1.49993. This result led to the first
significant progress on the overall conjecture in decades; the same authors
showed the integrality gap is at most 1.5 - ε, for some small ε > 0, in the
non-half-integral case. For the half-integral case, the current best-known ratio is 1.4983, a
result by Gupta et al.
With the improvements on the 3/2 bound remaining very incremental even in the
half-integral case, we turn the question around and look for a large class of
half-integral instances for which we can prove that the 4/3 conjecture is
correct.
The previous works on the half-integral case perform induction on a hierarchy
of critical tight sets in the support graph of the LP solution, in which some
of the sets correspond to "cycle cuts" and the others to "degree cuts". We show
that if all the sets in the hierarchy correspond to cycle cuts, then we can
find a distribution of tours whose expected cost is at most 4/3 times the value
of the half-integral LP solution; sampling from the distribution gives us a
randomized 4/3-approximation algorithm. We note that the known bad cases for
the integrality gap have a gap of 4/3 and have a half-integral LP solution in
which all the critical tight sets in the hierarchy are cycle cuts; thus our
result is tight.
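For concreteness, the integrality gap referred to above is the worst-case ratio between the cost of an optimal tour and the optimal value of the subtour-elimination LP relaxation; in standard notation (assumed here, not taken verbatim from the paper):

    \[
      \operatorname{gap} \;=\; \sup_{c \ge 0}\;
      \frac{\min \{\, c(T) \;:\; T \text{ is a tour of } V \,\}}
           {\min \{\, c^{\top} x \;:\; x(\delta(v)) = 2 \ \forall v \in V,\;
                      x(\delta(S)) \ge 2 \ \forall\, \emptyset \neq S \subsetneq V,\;
                      x \ge 0 \,\}} ,
    \]

so the four-thirds conjecture states gap ≤ 4/3, and an instance is half-integral when its optimal LP solution has every coordinate in {0, 1/2, 1}.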
A Combinatorial Cut-Toggling Algorithm for Solving Laplacian Linear Systems
Over the last two decades, a significant line of work in theoretical algorithms has made progress in solving linear systems of the form Lx = b, where L is the Laplacian matrix of a weighted graph with weights w(i,j) > 0 on the edges. The solution x of the linear system can be interpreted as the potentials of an electrical flow in which the resistance on edge (i,j) is 1/w(i,j). Kelner, Orecchia, Sidford, and Zhu [Kelner et al., 2013] give a combinatorial, near-linear time algorithm that maintains the Kirchhoff Current Law, and gradually enforces the Kirchhoff Potential Law by updating flows around cycles (cycle toggling).
In this paper, we consider a dual version of the algorithm that maintains the Kirchhoff Potential Law, and gradually enforces the Kirchhoff Current Law by cut toggling: each iteration updates all potentials on one side of a fundamental cut of a spanning tree by the same amount. We prove that this dual algorithm also runs in a near-linear number of iterations.
We show, however, that if we abstract cut toggling as a natural data structure problem, this problem can be reduced to the online vector-matrix-vector problem (OMv), which has been conjectured to be difficult for dynamic algorithms [Henzinger et al., 2015]. The conjecture implies that the data structure does not have an O(n^{1-ε}) time algorithm for any ε > 0, and thus a straightforward implementation of the cut-toggling algorithm requires essentially linear time per iteration.
To circumvent the lower bound, we batch update steps, and perform them simultaneously instead of sequentially. An appropriate choice of batching leads to an Õ(m^{1.5}) time cut-toggling algorithm for solving Laplacian systems. Furthermore, we show that if we sparsify the graph and call our algorithm recursively on the Laplacian system implied by batching and sparsifying, we can reduce the running time to O(m^{1+ε}) for any ε > 0. Thus, the dual cut-toggling algorithm can achieve (almost) the same running time as its primal cycle-toggling counterpart.
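A single cut-toggling step can be sketched as follows; the graph, spanning tree, and sweep schedule are toy assumptions, and this is the plain sequential iteration rather than the batched or recursive algorithm whose running time is analyzed above.

    # One cut-toggling sweep for a Laplacian system L p = b (illustrative sketch).
    # Each step shifts all potentials on one side of a fundamental cut of a
    # spanning tree so that Kirchhoff's Current Law holds across that cut.
    import numpy as np

    # Small weighted graph: edge list (u, v, weight); weights are conductances.
    edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 0.5)]
    n = 4
    b = np.array([1.0, -0.5, -0.5, 0.0])      # demands, summing to zero
    tree = [(0, 1), (1, 2), (2, 3)]            # spanning tree (assumed given)
    p = np.zeros(n)                            # potentials, initialised to zero

    def side_of_cut(tree_edge):
        """Vertices on the child side when tree_edge is removed from the tree."""
        u, v = tree_edge
        adj = {x: set() for x in range(n)}
        for a, c in tree:
            adj[a].add(c); adj[c].add(a)
        adj[u].discard(v); adj[v].discard(u)   # remove the cut edge
        comp, stack = {v}, [v]
        while stack:
            x = stack.pop()
            for y in adj[x] - comp:
                comp.add(y); stack.append(y)
        return comp

    for _ in range(200):                       # repeated sweeps over the tree edges
        for te in tree:
            S = side_of_cut(te)
            crossing = [(u, v, w) for (u, v, w) in edges if (u in S) != (v in S)]
            # Net current currently leaving S, and total conductance of the cut.
            flow = sum(w * ((p[u] - p[v]) if u in S else (p[v] - p[u]))
                       for u, v, w in crossing)
            cap = sum(w for _, _, w in crossing)
            delta = (b[list(S)].sum() - flow) / cap
            p[list(S)] += delta                # toggle the cut

    # Check: L p should match b (potentials are defined up to an additive shift).
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w; L[u, v] -= w; L[v, u] -= w
    print("residual:", np.round(L @ p - b, 6))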
Proportionally Fair Online Allocation of Public Goods with Predictions
We design online algorithms for the fair allocation of public goods to a set
of agents over a sequence of rounds and focus on improving their
performance using predictions. In the basic model, a public good arrives in
each round, the algorithm learns every agent's value for the good, and must
irrevocably decide the amount of investment in the good without exceeding a
given total budget across all rounds. The algorithm can utilize (potentially
inaccurate) predictions of each agent's total value for all the goods to
arrive. We measure the performance of the algorithm using a proportional
fairness objective, which informally demands that every group of agents be
rewarded in proportion to its size and the cohesiveness of its preferences.
In the special case of binary agent preferences and a unit budget, we show
that proportional fairness can be achieved without using any
predictions, and that this is optimal even if perfectly accurate predictions
were available. However, for general preferences and an arbitrary budget, no algorithm can
achieve better than proportional fairness without predictions. We
show that algorithms with (reasonably accurate) predictions can do much better,
achieving proportional fairness. We also extend this
result to a general model in which a batch of public goods arrive in each
round and achieve proportional fairness. Our
exact bounds are parametrized as a function of the error in the predictions,
and the performance degrades gracefully with increasing errors.
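The online loop has roughly the shape sketched below; the investment rule and the Nash-welfare diagnostic are placeholder choices for illustration, not the paper's algorithm or its exact proportional fairness objective.

    # Skeleton of the online public-goods allocation loop (illustrative only).
    import numpy as np

    rng = np.random.default_rng(2)
    n_agents, n_rounds, budget = 4, 10, 1.0

    values = rng.uniform(0, 1, size=(n_rounds, n_agents))             # v_{t,i}, revealed online
    predictions = values.sum(axis=0) + rng.normal(0, 0.1, n_agents)   # noisy total values

    spent = 0.0
    utility = np.zeros(n_agents)
    for t in range(n_rounds):
        v = values[t]                          # learned when the good arrives
        # Placeholder rule: invest more when the good serves agents whose predicted
        # total value is large, never exceeding the remaining budget.
        weight = np.sum(v / np.maximum(predictions, 1e-9))
        invest = min(budget - spent, (budget / n_rounds) * weight)
        spent += invest
        utility += invest * v                  # agents value the investment

    # Diagnostic in the spirit of proportional fairness: sum of log utilities.
    print("spent:", round(spent, 3), " log-welfare:", round(np.log(utility).sum(), 3))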
Conduction of Ultracold Fermions Through a Mesoscopic Channel
In a mesoscopic conductor, electric resistance is detected even if the device
is defect-free. We engineer and study a cold-atom analog of a mesoscopic
conductor. It consists of a narrow channel connecting two macroscopic
reservoirs of fermions that can be switched from ballistic to diffusive. We
induce a current through the channel and find ohmic conduction, even for a
ballistic channel. An analysis of in-situ density distributions shows that in
the ballistic case the chemical potential drop occurs at the entrance and exit
of the channel, revealing the presence of contact resistance. In contrast, a
diffusive channel with disorder displays a chemical potential drop spread over
the whole channel. Our approach opens the way towards quantum simulation of
mesoscopic devices with quantum gases.